Artificial Intelligence (AI) is now commonly used to solve routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to widen, consequently creating a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address this barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community building standards for AI deployment in healthcare institutions and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment, and we propose solutions. Our report provides guidance on the processes that take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical radiology workflow, and we present a taxonomy of radiology AI use cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about the cross-disciplinary challenges and possible solutions.
Despite the impact of psychiatric disorders on clinical health, early-stage diagnosis remains a challenge. Machine learning studies have shown that classifiers tend to be overly narrow in the diagnosis prediction task. The overlap between conditions leads to high heterogeneity among participants that is not adequately captured by classification models. To address this issue, normative approaches have emerged as an alternative. By using a generative model to learn the distribution of healthy brain data patterns, we can identify the presence of pathologies as deviations or outliers from the distribution learned by the model. In particular, deep generative models have shown great results as normative models for identifying neurological lesions in the brain. However, unlike most neurological lesions, psychiatric disorders present subtle changes widespread across several brain regions, making these alterations challenging to identify. In this work, we evaluate the performance of transformer-based normative models in detecting subtle brain changes expressed in adolescents and young adults. We trained our model on 3D MRI scans of neurotypical individuals (N=1,765). We then obtained the likelihood of neurotypical controls and of psychiatric patients with early-stage schizophrenia from an independent Human Connectome Project dataset (N=93). Using the predicted likelihood of the scans as a proxy for a normative score, we obtained an AUROC of 0.82 when assessing the difference between controls and individuals with early-stage schizophrenia. Our approach surpassed recent normative methods based on brain age and Gaussian Processes, demonstrating the promise of deep generative models for individualised analyses.
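As an illustration of the evaluation step described above, the sketch below computes an AUROC from per-scan likelihoods used as normative scores. The likelihood values and group sizes are placeholders, not the study's data; scikit-learn is assumed available.

```python
# Minimal sketch: per-scan log-likelihoods from a trained generative model
# serve as normative scores and the two groups are compared with an AUROC.
# The likelihood values below are synthetic placeholders for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical log-likelihoods assigned by the trained normative model.
ll_controls = np.random.normal(loc=-1.0, scale=0.3, size=60)  # neurotypical
ll_patients = np.random.normal(loc=-1.6, scale=0.4, size=33)  # early-stage schizophrenia

# Lower likelihood means a larger deviation from the healthy distribution,
# so scores are negated so that "more abnormal" ranks higher.
scores = np.concatenate([-ll_controls, -ll_patients])
labels = np.concatenate([np.zeros(60), np.ones(33)])  # 1 = patient

print(f"AUROC: {roc_auc_score(labels, scores):.2f}")
```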
Out-of-distribution detection is crucial to the safe deployment of machine learning systems. Currently, the state-of-the-art in unsupervised out-of-distribution detection is dominated by generative-based approaches that make use of estimates of the likelihood or other measurements from a generative model. Reconstruction-based methods offer an alternative approach, in which a measure of reconstruction error is used to determine if a sample is out-of-distribution. However, reconstruction-based approaches are less favoured, as they require careful tuning of the model's information bottleneck - such as the size of the latent dimension - to produce good results. In this work, we exploit the view of denoising diffusion probabilistic models (DDPM) as denoising autoencoders where the bottleneck is controlled externally, by means of the amount of noise applied. We propose to use DDPMs to reconstruct an input that has been noised to a range of noise levels, and use the resulting multi-dimensional reconstruction error to classify out-of-distribution inputs. Our approach outperforms not only reconstruction-based methods, but also state-of-the-art generative-based approaches.
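The following sketch illustrates the proposed procedure under stated assumptions: an input is corrupted to several noise levels with the standard DDPM forward process, reconstructed, and the per-level reconstruction errors are collected as an OOD signature. The `denoise` function is a placeholder for a trained DDPM's reverse process, and the linear beta schedule is an assumption, not the paper's configuration.

```python
# Sketch of multi-noise-level reconstruction error for OOD detection.
import torch

def denoise(x_noisy: torch.Tensor, t: int) -> torch.Tensor:
    """Placeholder for running a trained DDPM's reverse chain from timestep t."""
    return x_noisy  # a real model would return a denoised reconstruction

def ood_errors(x: torch.Tensor, alphas_cumprod: torch.Tensor,
               timesteps=(100, 300, 500, 700)) -> torch.Tensor:
    """Noise x to several levels, reconstruct, and stack the errors."""
    errors = []
    for t in timesteps:
        a_bar = alphas_cumprod[t]
        noise = torch.randn_like(x)
        # Standard DDPM forward corruption: x_t = sqrt(a_bar)*x + sqrt(1-a_bar)*eps
        x_t = a_bar.sqrt() * x + (1 - a_bar).sqrt() * noise
        x_hat = denoise(x_t, t)
        errors.append(torch.mean((x - x_hat) ** 2))
    # The multi-dimensional error vector is what gets classified as in- or
    # out-of-distribution; it is simply returned here for illustration.
    return torch.stack(errors)

# Example usage with an assumed linear beta schedule.
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
x = torch.randn(1, 1, 64, 64)  # stand-in image
print(ood_errors(x, alphas_cumprod))
```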
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of the medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams around the world, who are pursuing applications spanning nearly every aspect of healthcare.
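A brief sketch of the kind of workflow MONAI streamlines is shown below, combining composable transforms, a purpose-built network, and a segmentation loss from MONAI's public API; the shapes and hyperparameters are invented for illustration, not a recommended configuration.

```python
# Sketch of a MONAI workflow: medical-image transforms composed in PyTorch
# style, a purpose-specific 3D network, and a segmentation loss.
import torch
from monai.losses import DiceLoss
from monai.networks.nets import UNet
from monai.transforms import Compose, RandRotate90, ScaleIntensity

# Transforms extend PyTorch-style composition to medical data.
preprocess = Compose([ScaleIntensity(), RandRotate90(prob=0.5, spatial_axes=(0, 1))])

# A 3D U-Net from MONAI's purpose-specific architectures.
net = UNet(spatial_dims=3, in_channels=1, out_channels=2,
           channels=(16, 32, 64, 128), strides=(2, 2, 2), num_res_units=2)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

image = preprocess(torch.rand(1, 32, 32, 32))           # (C, D, H, W) stand-in volume
logits = net(image.unsqueeze(0))                        # add batch dimension
label = torch.randint(0, 2, (1, 1, 32, 32, 32)).float() # stand-in segmentation label
print(loss_fn(logits, label))
```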
To achieve good performance and generalisability, medical image segmentation models should be trained on large datasets with sufficient variability. Due to ethics and governance restrictions, and the costs associated with labelling data, scientific development is often stifled, with models trained and tested on limited data. Data augmentation is often used to artificially increase the variability in the data distribution and improve model generalisability. Recent works have explored deep generative models for image synthesis, as such an approach would enable the generation of an effectively infinite amount of varied data, addressing the generalisability and data access problems. However, many proposed solutions limit the user's control over what is generated. In this work, we propose brainSPADE, a model which combines a synthetic diffusion-based label generator with a semantic image generator. Our model can produce fully synthetic brain labels, with or without a pathology of interest, and then generate a corresponding MRI image in an arbitrary guided style. Experiments show that brainSPADE synthetic data can be used to train segmentation models with performance comparable to that of models trained on real data.
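The sketch below is a schematic of the two-stage pipeline described above: a label generator produces a synthetic semantic map, and a label-conditioned image generator renders an MRI in a chosen style. Both modules are trivial stand-ins, not brainSPADE's actual components; the class counts, resolutions, and style-code dimension are invented for illustration.

```python
# Schematic two-stage synthesis: labels first, then a styled image.
import torch

class LabelGenerator(torch.nn.Module):
    """Stand-in for the diffusion-based semantic label generator."""
    def forward(self, with_pathology: bool) -> torch.Tensor:
        n_classes = 6 if with_pathology else 5  # e.g. tissue classes (+ lesion)
        return torch.softmax(torch.randn(1, n_classes, 128, 128), dim=1)

class SemanticImageGenerator(torch.nn.Module):
    """Stand-in for the label-conditioned (SPADE-style) image generator."""
    def forward(self, label_map: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        return torch.rand(1, 1, *label_map.shape[2:])  # synthetic MRI slice

label_gen, image_gen = LabelGenerator(), SemanticImageGenerator()
label_map = label_gen(with_pathology=True)  # stage 1: synthetic labels
style = torch.randn(1, 256)                 # style code from a guide image (assumed size)
mri = image_gen(label_map, style)           # stage 2: MRI in the guided style
print(mri.shape)
```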
Deep neural networks have brought remarkable breakthroughs in medical image analysis. However, due to their data-hungry nature, the modest dataset sizes typical of medical imaging projects may hinder their full potential. Generating synthetic data provides a promising alternative, making it possible to complement training datasets and conduct medical image research at a larger scale. Diffusion models have recently attracted the attention of the computer vision community by producing photorealistic synthetic images. In this study, we explore using latent diffusion models to generate synthetic images from high-resolution 3D brain images. We used T1w MRI images from the UK Biobank dataset (N=31,740) to train our models to learn the probabilistic distribution of brain images, conditioned on covariates such as age, sex, and brain structure volumes. We found that our models created realistic data, and that the conditioning variables could effectively control the data generation. Beyond that, we created a synthetic dataset with 100,000 brain images and made it openly available to the scientific community.
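As a sketch of the covariate conditioning described above, the snippet below assembles a normalised conditioning vector from age, sex, and structure volumes; `LatentDiffusionSampler`, the covariate ranges, and the volume values are all hypothetical placeholders, not the paper's interface or data.

```python
# Sketch: turning covariates into a conditioning tensor for sampling.
import torch

def normalise(value: float, low: float, high: float) -> float:
    """Scale a covariate to [0, 1], a common conditioning convention."""
    return (value - low) / (high - low)

# Illustrative conditioning vector (all ranges and values are assumptions).
cond = torch.tensor([[normalise(63.0, 45.0, 80.0),        # age in years
                      1.0,                                 # sex (1 = male)
                      normalise(28.0, 10.0, 90.0),         # ventricular volume, ml
                      normalise(1200.0, 900.0, 1600.0)]])  # brain volume, ml
print(cond)

# Hypothetical sampling call with a trained model (not a real API):
# sampler = LatentDiffusionSampler.load("trained-weights.pt")
# volume = sampler.sample(conditioning=cond)  # -> synthetic 3D T1w MRI
```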
The data used in image segmentation are not always defined on the same grid. This is particularly true for medical images, where the resolution, field of view, and orientation can vary across channels and subjects. Images and labels are therefore commonly resampled onto the same grid as a pre-processing step. However, the resampling operation introduces partial volume effects and blurring, thereby changing the effective resolution and reducing the contrast between structures. In this paper, we propose a splat layer, which automatically handles resolution mismatches in the input data. This layer pushes each image onto a mean space in which the forward pass is performed. As the splat operator is the adjoint of the resampling operator, the mean-space prediction can be pulled back to the native label space, where the loss function is computed. The need for explicit resolution adjustment using interpolation is thereby removed. We show on two publicly available datasets, with simulated and real multi-modal magnetic resonance images, that this model improves segmentation results compared with resampling as a pre-processing step.
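The push/pull duality the splat layer relies on can be demonstrated directly in PyTorch: if "pull" is (bi/tri)linear resampling, its adjoint "splat" follows from autograd, because resampling is linear in the image intensities. The 2D identity grid and shapes below are illustrative, not the paper's implementation.

```python
# Sketch: splat as the autograd-derived adjoint of grid-based resampling.
import torch
import torch.nn.functional as F

def pull(image: torch.Tensor, grid: torch.Tensor) -> torch.Tensor:
    """Resample `image` at the sampling locations in `grid` (the usual op)."""
    return F.grid_sample(image, grid, mode='bilinear', align_corners=True)

def splat(values: torch.Tensor, grid: torch.Tensor, out_shape) -> torch.Tensor:
    """Adjoint of `pull`: push `values` back onto a grid of `out_shape`."""
    probe = torch.zeros(out_shape, requires_grad=True)
    # Differentiating <pull(probe), values> w.r.t. probe yields pull^T(values).
    inner = (pull(probe, grid) * values).sum()
    return torch.autograd.grad(inner, probe)[0]

# Identity sampling grid for a small 2D example, shape (N, H, W, 2).
theta = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
grid = F.affine_grid(theta, size=(1, 1, 8, 8), align_corners=True)
x = torch.rand(1, 1, 8, 8)

# With an identity grid, splat(pull(x)) recovers x (adjoint sanity check).
assert torch.allclose(splat(pull(x, grid), grid, x.shape), x, atol=1e-5)
```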
Deep generative models have emerged as promising tools for detecting arbitrary anomalies in data, dispensing with the necessity for manual labelling. Recently, autoregressive transformers have achieved state-of-the-art performance in medical imaging. However, these models still have some intrinsic weaknesses, such as requiring images to be modelled as 1D sequences, the accumulation of errors during the sampling process, and the significant inference times associated with transformers. Denoising diffusion probabilistic models are a class of non-autoregressive generative models recently shown to produce excellent samples in computer vision (surpassing generative adversarial networks), achieving log-likelihoods competitive with transformers while having fast inference times. Diffusion models can be applied to the latent representations learnt by autoencoders, making them easily scalable and excellent candidates for application to high-dimensional data such as medical images. Here, we propose a method based on diffusion models to detect and segment anomalies in brain imaging. By training the models on healthy data and then exploring their diffusion and reverse steps across the Markov chain, we can identify anomalous areas in the latent space and hence determine the anomalies in the pixel space. Our diffusion models show competitive performance across a series of experiments with 2D CT and MRI data involving synthetic and real pathological lesions, with greatly reduced inference times, making their use clinically viable.
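A schematic sketch of the recipe described above follows: encode the image into a latent space, run part of the diffusion forward and reverse chain on the latent (projecting it towards the healthy distribution), decode, and take the residual as an anomaly map. The encoder, decoder, reverse chain, noise schedule, starting timestep, and threshold are all placeholders, not the authors' trained models or settings.

```python
# Sketch: latent-diffusion anomaly detection and segmentation.
import torch

def encode(x): return x            # stand-in for the autoencoder encoder
def decode(z): return z            # stand-in for the autoencoder decoder
def reverse_chain(z_t, t): return z_t  # stand-in for the DDPM reverse steps

def anomaly_map(x, alphas_cumprod, t=250, threshold=0.5):
    z = encode(x)
    a_bar = alphas_cumprod[t]
    # Forward diffusion on the latent, partway along the Markov chain.
    z_t = a_bar.sqrt() * z + (1 - a_bar).sqrt() * torch.randn_like(z)
    z_healthy = reverse_chain(z_t, t)         # reverse steps towards healthy data
    residual = (x - decode(z_healthy)).abs()  # pixel-space deviation
    return residual, residual > threshold     # anomaly map and binary segmentation

betas = torch.linspace(1e-4, 0.02, 1000)      # assumed linear schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
res, seg = anomaly_map(torch.rand(1, 1, 64, 64), alphas_cumprod)
print(res.shape, seg.sum())
```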
The field of automatic biomedical image analysis crucially depends on robust and meaningful performance metrics for algorithm validation. Current metric usage, however, is often ill-informed and does not reflect the underlying domain interest. Here, we present a comprehensive framework that guides researchers towards choosing performance metrics in a problem-aware manner. Specifically, we focus on biomedical image analysis problems that can be interpreted as classification tasks at the image, object, or pixel level. The framework first compiles domain-interest-, target-structure-, dataset-, and algorithm-output-related properties of a given problem into a problem fingerprint, while also mapping it to the appropriate problem category, namely image-level classification, semantic segmentation, instance segmentation, or object detection. It then guides users through the process of selecting and applying a set of appropriate validation metrics, while making them aware of potential pitfalls related to individual choices. In this paper, we describe the current status of the Metrics Reloaded recommendation framework, with the goal of obtaining constructive feedback from the image analysis community. The current version was developed within an international consortium of more than 60 image analysis experts and will be made openly available as a user-friendly toolkit after community-driven optimisation.
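As a toy illustration (not the actual Metrics Reloaded logic) of how fingerprint properties can map to the problem categories named above, a minimal sketch might look as follows; the real framework encodes many more properties and pitfalls.

```python
# Toy fingerprint-to-category mapping, for illustration only.
def problem_category(granularity: str, distinguishes_instances: bool) -> str:
    """Map a (much simplified) problem fingerprint to a validation category."""
    if granularity == "image":
        return "image-level classification"
    if granularity == "object":
        return "object detection"
    # Pixel-level decisions split on whether instances must be told apart.
    return "instance segmentation" if distinguishes_instances else "semantic segmentation"

print(problem_category("pixel", distinguishes_instances=False))  # semantic segmentation
```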
Domain adaptation (DA) has recently raised strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of them have been validated either on private datasets or on small, publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality DA. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance of VS patients are performed using contrast-enhanced T1 (ceT1) MRI. However, there is growing interest in using non-contrast sequences such as high-resolution T2 (hrT2) MRI. We therefore created an unsupervised cross-modality segmentation benchmark. The training set provides annotated ceT1 scans (N=105) and unpaired, non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on the hrT2 scans provided in the testing set (N=137). A total of 16 teams submitted algorithms for the evaluation stage. The level of performance reached by the top-performing teams is strikingly high (best median Dice score - VS: 88.4%; cochleas: 85.7%) and close to full supervision (median Dice score - VS: 92.5%; cochleas: 87.7%). All top-performing methods used an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source images.
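The common recipe of the top-performing teams can be sketched as the two-stage loop below: translate annotated ceT1 volumes into pseudo-hrT2 volumes, then train a segmentation network on the translated images with the original ceT1 annotations. The translator and segmenter are trivial stand-ins (a real pipeline would use, e.g., a CycleGAN-style translator and a U-Net), and the tensor shapes are illustrative, not any team's actual implementation.

```python
# Sketch of the translate-then-segment recipe used by top crossMoDA teams.
import torch

translator = lambda cet1: cet1        # stand-in: ceT1 -> pseudo-hrT2 translation
segmenter = torch.nn.Conv3d(1, 3, 1)  # stand-in: 3 classes (background, VS, cochlea)
optimiser = torch.optim.Adam(segmenter.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

# One illustrative batch: a ceT1 volume and its manual annotation.
for cet1, labels in [(torch.rand(2, 1, 16, 32, 32),
                      torch.randint(0, 3, (2, 16, 32, 32)))]:
    pseudo_hrt2 = translator(cet1)    # stage 1: modality translation
    logits = segmenter(pseudo_hrt2)   # stage 2: supervised segmentation
    loss = loss_fn(logits, labels)    # labels come from the ceT1 (source) domain
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```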